Twins! Rivals! Clones! Hollywood is doubling down on dual roles
For years, dual roles have been played largely for laughs. Think of Adam Sandler's Razzie-sweeping twin turn in Jack and Jill, or Lisa Kudrow as both Phoebe and Ursula Buffay on Friends. Eddie Murphy has always been particularly prolific, his most multiplicitous performance coming as a clutch of Klumps in Nutty Professor II. There are exceptions, of course. But for every Legend or The Prestige there are ten Austin Powers, Bowfingers and – shudder – Norbits.
On-Device LLMs for Home Assistant: Dual Role in Intent Detection and Response Generation
Rune Birkmose, Nathan Mørkeberg Reece, Esben Hofstedt Norvin, Johannes Bjerva, Mike Zhang
This paper investigates whether Large Language Models (LLMs), fine-tuned on synthetic but domain-representative data, can perform the twofold task of (i) slot and intent detection and (ii) natural language response generation for a smart home assistant, while running solely on resource-limited, CPU-only edge hardware. We fine-tune LLMs to produce both JSON action calls and text responses. Our experiments show that 16-bit and 8-bit quantized variants preserve high accuracy on slot and intent detection and maintain strong semantic coherence in generated text, whereas the 4-bit model, though retaining generative fluency, suffers a noticeable drop in device-service classification accuracy. Further evaluations on noisy human (non-synthetic) prompts and out-of-domain intents confirm the models' generalization ability, achieving around 80–86% accuracy. While the average inference time is 5–6 seconds per query – acceptable for one-shot commands but suboptimal for multi-turn dialogue – our results affirm that an on-device LLM can effectively unify command interpretation and flexible response generation for home automation without relying on specialized hardware.
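The dual output the abstract describes – a structured JSON action call plus a free-text reply – implies the deployment code must split and validate the model's completion. A minimal sketch of such a parser, assuming a hypothetical format where the JSON call and the response are separated by a `---` delimiter (the field names `device` and `service` are illustrative, not the paper's actual schema):

```python
import json

def parse_assistant_output(raw: str, delimiter: str = "\n---\n"):
    """Split a model completion into (action_dict, response_text).

    Assumes the fine-tuned model emits a JSON action call first,
    then a natural-language response after the delimiter.
    """
    action_part, _, response = raw.partition(delimiter)
    action = json.loads(action_part)  # raises ValueError on malformed JSON
    return action, response.strip()

# Example completion in the assumed format:
raw = ('{"device": "light.kitchen", "service": "turn_on", "brightness": 80}'
       "\n---\nSure, turning on the kitchen lights to 80% brightness.")
action, reply = parse_assistant_output(raw)
```

In practice the JSON half would be handed to the home-automation service dispatcher and the text half to the speech/chat front end, which is what lets one model serve both roles.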
Zhang
User-item connected documents, such as customer reviews for specific items on online shopping websites and user tips in location-based social networks, have become increasingly prevalent. Inferring the topic distributions of user-item connected documents is beneficial for many applications, including document classification and the summarization of users and items. While many topic models have been proposed for modeling text, most of them cannot account for the dual role of user-item connected documents (each document is related to one user and one item simultaneously) in the topic-distribution generation process. In this paper, we propose a novel probabilistic topic model called Prior-based Dual Additive Latent Dirichlet Allocation (PDA-LDA). It addresses the dual role of each document by associating its Dirichlet prior for the topic distribution with user and item topic factors, which leads to a document-level asymmetric Dirichlet prior. In the experiments, we evaluate PDA-LDA on several real datasets, and the results demonstrate that our model is effective in comparison to several other models, both in held-out perplexity for text modeling and in a document classification application.
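The key construction in the abstract is the document-level asymmetric Dirichlet prior built from user and item topic factors. A minimal sketch of one plausible parameterization, assuming (from the name "Dual Additive") that per-topic user and item factors are combined additively and mapped to positive concentration parameters via an exponential – the exact form in the paper may differ:

```python
import math

def document_prior(base_alpha, user_factor, item_factor):
    """Per-topic Dirichlet concentration for a document by `user` about `item`.

    base_alpha:  shared positive base hyperparameter
    user_factor: per-topic real-valued factors for the user (assumed)
    item_factor: per-topic real-valued factors for the item (assumed)
    """
    assert len(user_factor) == len(item_factor)
    # exp() keeps each concentration strictly positive even though the
    # user/item factors are unconstrained reals; each topic k gets its
    # own value, making the prior asymmetric across topics.
    return [base_alpha * math.exp(u + i)
            for u, i in zip(user_factor, item_factor)]

# One document: 3 topics, written by a user with factors [0.5, -0.2, 0.0]
# about an item with factors [0.3, 0.1, -0.4] (all values hypothetical).
alpha_d = document_prior(0.1,
                         user_factor=[0.5, -0.2, 0.0],
                         item_factor=[0.3, 0.1, -0.4])
```

Because the resulting `alpha_d` differs per document, two documents by the same user about different items (or vice versa) get different topic priors, which is exactly the "dual role" the model is designed to capture.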